hierarchical structure


Hierarchical Contrastive Learning for Multimodal Data

Li, Huichao, Yu, Junhan, Zhou, Doudou

arXiv.org Machine Learning

Multimodal representation learning is commonly built on a shared-private decomposition, treating latent information as either common to all modalities or specific to one. This binary view is often inadequate: many factors are shared by only subsets of modalities, and ignoring such partial sharing can over-align unrelated signals and obscure complementary information. We propose Hierarchical Contrastive Learning (HCL), a framework that learns globally shared, partially shared, and modality-specific representations within a unified model. HCL combines a hierarchical latent-variable formulation with structural sparsity and a structure-aware contrastive objective that aligns only modalities that genuinely share a latent factor. Under uncorrelated latent variables, we prove identifiability of the hierarchical decomposition, establish recovery guarantees for the loading matrices, and derive parameter estimation and excess-risk bounds for downstream prediction. Simulations show accurate recovery of hierarchical structure and effective selection of task-relevant components. On multimodal electronic health records, HCL yields more informative representations and consistently improves predictive performance.
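The structure-aware contrastive objective described above can be sketched in a minimal form: apply a pairwise InfoNCE loss only between modality pairs that the hierarchical structure says share a latent factor. This is an illustrative sketch, not the paper's exact formulation; the function names, the binary sharing mask, and the NumPy implementation are all assumptions for exposition.

```python
import numpy as np

def info_nce(za, zb, temperature=0.1):
    """InfoNCE between two batches of embeddings of shape (batch, dim).

    Matched rows of za and zb are treated as positive pairs; all other
    rows in the batch serve as negatives.
    """
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    logits = za @ zb.T / temperature  # (batch, batch) similarity matrix
    # numerically stable log-softmax; positives lie on the diagonal
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def structure_aware_contrastive_loss(embeddings, share_mask, temperature=0.1):
    """Align only modality pairs that genuinely share a latent factor.

    embeddings: list of (batch, dim) arrays, one per modality.
    share_mask: boolean matrix (hypothetical); share_mask[i][j] is True
                when modalities i and j share at least one latent factor.
    """
    loss, n_pairs = 0.0, 0
    m = len(embeddings)
    for i in range(m):
        for j in range(i + 1, m):
            if share_mask[i][j]:
                loss += info_nce(embeddings[i], embeddings[j], temperature)
                n_pairs += 1
    return loss / max(n_pairs, 1)  # average over the aligned pairs
```

Under a naive shared-private view, the mask would be all-True (align everything); the hierarchical view replaces it with the learned or estimated sharing structure, so unrelated modality pairs exert no alignment pressure on each other.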




Variational Temporal Abstraction

Taesup Kim, Sungjin Ahn, Yoshua Bengio

Neural Information Processing Systems

There have been approaches to learning such hierarchical structure in sequences, such as the HMRNN (Chung et al., 2016). However, as a deterministic model, it has the key limitation that it cannot capture the stochastic nature prevailing in the data. In particular, this is a critical limitation for imagination-augmented agents, because exploring various possible futures according to their uncertainty is what makes imagination meaningful in many cases.




Novel positional encodings to enable tree-based transformers

Vighnesh Shiv, Chris Quirk

Neural Information Processing Systems

Motivated by this property, we propose a method to extend transformers to tree-structured data, enabling sequence-to-tree, tree-to-sequence, and tree-to-tree mappings. Our approach abstracts the transformer's sinusoidal positional encodings, allowing us to instead use a novel positional encoding scheme to represent node positions within trees.
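One simple way to represent a node's position within a tree, in the spirit of the encoding scheme above, is to stack one-hot vectors of the child indices chosen along the root-to-node path, padded to a maximum depth. This is a hedged sketch only; the paper's actual scheme differs in its details, and the function name, padding convention, and parameters here are assumptions.

```python
import numpy as np

def tree_positional_encoding(path, branching_factor, max_depth):
    """Encode a node position as stacked one-hot child choices (a sketch).

    path: list of child indices taken from the root; e.g. [0, 2] means
          "the root's first child, then that node's third child".
    Returns a fixed-length vector of size max_depth * branching_factor,
    zero-padded below the node's depth.
    """
    enc = np.zeros((max_depth, branching_factor))
    for depth, child in enumerate(path):
        enc[depth, child] = 1.0  # one-hot step taken at this depth
    return enc.reshape(-1)       # flatten to a fixed-length vector
```

Because each depth occupies a disjoint slice of the vector, a parent's encoding is a prefix of its children's encodings, which gives the attention mechanism a consistent notion of relative position in the tree.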